Previous Blogs

October 3, 2017
The Business Challenges of Artificial Intelligence

September 26, 2017
Microsoft Takes Computing to the Extremes

September 19, 2017
What is the Future of Upgrades?

September 12, 2017
It’s Time for Modern Digital Identities

September 5, 2017
The Autonomous Car Charade

August 29, 2017
The Golden Era of Notebooks

August 22, 2017
The Evolution of Smart Speakers

August 15, 2017
The Myth of General Purpose Wearables

August 8, 2017
IoT Connections Made Easy

August 1, 2017
Smarter Computing

July 25, 2017
The Value of Limits

July 18, 2017
Tech in the Heartland

June 27, 2017
Business Realities vs. Tech Dreams

June 20, 2017
The Power of Hidden Tech

June 13, 2017
Computing Evolves from Outside In to Inside Out

June 6, 2017
The Overlooked Surprises of Apple’s WWDC Keynote

May 30, 2017
Are AR and VR Only for Special Occasions?

May 23, 2017
The Digital Car

May 16, 2017
Digital Assistants Drive New Meta-Platform Battle

May 9, 2017
Getting Smart on Smart Speakers

May 5, 2017
Intel Opens High-Tech "Garage"

May 2, 2017
The Hidden Value of Analog

April 28, 2017
Google’s Waymo Starts Driving Passengers

April 25, 2017
The Robotic Future

April 21, 2017
Sony Debuts New Pro Camera

April 18, 2017
Should Apple Build a Car?

April 14, 2017
PC Market Outlook Improving

April 11, 2017
Little Data Analytics

April 7, 2017
Facebook Debuts Free Version of Workplace Collaboration Tool

April 4, 2017
Samsung Building a Platform Without an OS

March 31, 2017
Microsoft Announces Windows 10 Creators Update Release Date

March 28, 2017
Augmented Reality Finally Delivers on 3D Promise

March 24, 2017
Intel Creates AI Organization

March 21, 2017
Chip Magic

March 17, 2017
Microsoft Unveils Teams Chat App

March 14, 2017
Computing on the Edge

March 7, 2017
Cars Need Digital Safety Standards Too

February 28, 2017
The Messy Path to 5G

February 24, 2017
AMD Launches Ryzen CPU

February 21, 2017
Rethinking Wearable Computing

February 17, 2017
Samsung Heir Arrest Unlikely to Impact Sales

February 14, 2017
Modern Workplaces Still More Vision Than Reality

February 10, 2017
Lenovo Develops Energy-Efficient Soldering Technology

February 7, 2017
The Missing Map from Silicon Valley to Main Street

January 31, 2017
The Network vs. The Computer

January 27, 2017
Facebook Adds Support For FIDO Security Keys

January 24, 2017
Voice Drives New Software Paradigm

January 20, 2017
Tesla Cleared of Fault in NHTSA Crash Probe

January 17, 2017
Inside the Mind of a Hacker

January 13, 2017
PC Shipments Stumble but Turnaround is Closer

January 10, 2017
Takeaways from CES 2017

January 3, 2017
Top 10 Tech Predictions for 2017

2016 Blogs

2015 Blogs

2014 Blogs

2013 Blogs

TECHnalysis Research Blog

October 10, 2017
Edge Computing Could Weaken the Cloud

By Bob O'Donnell

Ask anyone on the business side of the tech industry about the most important development they’ve witnessed over the last decade or so, and they’ll invariably say the cloud. After all, it’s the continuously connected, intelligently managed, and nearly unlimited computing capabilities of the cloud that have enabled everything from consumer services like Netflix, to business applications like Salesforce, to social media platforms like Facebook, to online commerce giants like Amazon, radically transforming our business and personal lives. Plus, beyond the centralized storage and computing capabilities for which it’s best known, cloud computing models have also led to radical changes in how software applications are designed, built, managed, monetized, and delivered. In short, the cloud has changed nearly everything in tech.

In that light, suggesting that something as powerful and omnipresent as the cloud could start to weaken may border on the naïve. And yet, there are growing signs—perhaps some “fog” on the cloud horizon?—that suggest that’s exactly what’s starting to happen. To be clear, cloud computing, and all the advancements it’s driven in products, services, and processes, isn’t going away, but I do believe we’re starting to see a shift in some areas away from the cloud and toward the concept of edge computing.

In edge computing, certain tasks are done closer to the edge or end of the network on client devices, gateways, connected sensors, and other IoT (Internet of Things) gadgets, rather than on the large servers and other infrastructure elements that make up the cloud. From autonomous cars, to connected machines, to new devices like the Intel Movidius VPU (visual processing unit)-powered Google Clips smart camera, we’re seeing an enormous range of new edge computing clients start to hit the market.

While many of these devices are very different in terms of their capabilities, function and purpose, there are several characteristics that unite them. First, most of these devices are designed to take in, analyze, and react to real-time data from the environment around them. Leveraging a range of connected sensors, these edge devices ingest everything from location and temperature data to sound and images (and much more), and then compute an appropriate response, whether that be to slow a car down, provide a preventative maintenance warning, or take a picture when everyone in view is smiling.
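
To make that sense-analyze-respond pattern concrete, here is a minimal, purely illustrative Python sketch of the kind of loop such a device might run locally; the sensor values, thresholds, and function names are all assumptions of mine, not any vendor’s actual code.

```python
import time

def read_sensors():
    # Stand-in for polling real hardware (cameras, thermometers, accelerometers, GPS).
    return {"temperature_c": 78.0, "vibration_g": 0.4}

def analyze(sample):
    # A trivial local rule; a real device might run an ML model here instead.
    if sample["temperature_c"] > 75.0 or sample["vibration_g"] > 0.35:
        return "maintenance_warning"
    return "ok"

def act(decision):
    if decision == "maintenance_warning":
        print("Preventative maintenance warning issued locally, no cloud round trip needed")

for _ in range(5):      # a real device would loop indefinitely
    act(analyze(read_sensors()))
    time.sleep(0.1)     # react on a roughly 100 ms cycle
```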

The second shared characteristic involves the manner in which this real-time data is analyzed. While many of these edge computing devices have traditional computing components, such as CPUs or ARM-based microcontrollers, they all also have new and different types of processing components—from GPUs, to FPGAs (field programmable gate arrays), to DSPs (digital signal processors), to neural net accelerators, and beyond. In addition, many of these applications use machine learning or artificial intelligence algorithms to analyze the results. It turns out that this hybrid combination of traditional and “newer” types of computing is the most efficient mechanism for performing the new kinds of calculations these applications require.
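
A hedged sketch of that hybrid dispatch idea, with entirely hypothetical function names rather than any real SDK: route a model to a neural accelerator, GPU, or DSP when one is present, and fall back to the general-purpose CPU when it isn’t.

```python
def available_accelerators():
    # Hypothetical probe; a real device would query its runtime or vendor SDK.
    return ["npu"]  # could also be ["gpu", "dsp"], or [] on a plain microcontroller

def run_inference(frame, device):
    # Stand-in for executing a compiled neural network on the chosen processing block.
    return {"device": device, "label": "smiling_faces", "confidence": 0.93}

def classify(frame):
    accelerators = available_accelerators()
    device = accelerators[0] if accelerators else "cpu"  # prefer the specialized silicon
    return run_inference(frame, device)

print(classify(frame=b"raw image bytes"))
```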

The third unifying characteristic of edge computing devices gets to the heart of why these kinds of applications are being built independent from, or migrated (either partially or completely) away from, the cloud. They all require the kind of real-time performance, limited latency, and/or security and privacy guarantees that best come from on-device computing. Even with the promise of tremendous increases in broadband network speed and reductions in latency that 5G should bring, the network is never going to replace the kind of immediate response an autonomous car needs when it “sees” and has to respond to an obstacle in front of it. Similarly, if we ever want our interactions with personal assistant-powered devices (i.e., those using Alexa, Google Assistant, etc.) to move beyond one-question requests and into naturally flowing, multi-part conversations, some amount of intelligence and capability is going to have to be built into edge devices.
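
The latency argument can be made tangible with some back-of-the-envelope arithmetic; the figures below are my own illustrative assumptions, not measured values.

```python
# Assumed numbers only: compare each task's response deadline with an optimistic
# cloud round-trip time (network plus server processing).
CLOUD_ROUND_TRIP_MS = 50

TASKS_MS = {
    "brake_for_obstacle": 10,       # an autonomous car cannot wait for the network
    "follow_up_question": 40,       # keeping a multi-part conversation flowing
    "weekly_usage_report": 60_000,  # nobody notices if this takes a minute
}

for task, deadline in TASKS_MS.items():
    place = "must run on-device" if deadline < CLOUD_ROUND_TRIP_MS else "cloud is fine"
    print(f"{task}: {deadline} ms budget -> {place}")
```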

Beyond some of the technical requirements driving growth in edge computing, there are also some larger trends at work. With the tremendously fast growth of the cloud, the pendulum of computing had swung toward centralized resources, much as it did in the early era of mainframe-driven computing. With edge computing, we’re starting to see a new evolution of the client-server era that appeared after mainframes. As with that transition, the move to more distributed computing models doesn’t imply the end of centralized computing elements, but rather a broadening of possible applications. The truth is, edge computing is really about driving a hybrid computing model that combines aspects of the cloud with client-side computing to enable new kinds of applications that either aren’t well suited to, or aren’t possible with, a cloud-only approach.
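
As a minimal sketch of that hybrid split, assuming a connected-car style workload and invented function names: the device makes the time-critical decision itself and only ships summaries up to the cloud for the heavier, non-urgent analysis.

```python
import json

def handle_locally(event):
    # The time-critical decision is made on the device itself.
    return {"action": "slow_down"} if event["obstacle"] else {"action": "continue"}

def queue_for_cloud(summary, batch):
    # Non-urgent data is batched and uploaded later for fleet-wide analytics,
    # long-term storage, model retraining, and so on.
    batch.append(json.dumps(summary))

upload_batch = []
event = {"obstacle": True, "speed_kph": 42}
print(handle_locally(event))                                      # decided at the edge, in real time
queue_for_cloud({"speed_kph": event["speed_kph"]}, upload_batch)  # analyzed in the cloud later
print(f"{len(upload_batch)} record(s) queued for the cloud")
```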

Exactly what some of these new edge applications turn out to be remains to be seen, but it’s clear that we’re at the dawn of an exciting new age for computing and tech in general. Importantly, it’s an era that’s going to drive the growth of new types of products and services, as well as shift the nexus of power amongst tech industry leaders. For those companies that can adapt to the new realities that edge computing models will start to drive over the next several years, these will be exciting times. But for those that can’t—even if they seem nearly invincible today—the potential for becoming a footnote in history could end up being surprisingly real.

Here's a link to the column: https://techpinions.com/edge-computing-could-weaken-the-cloud/51293

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

Podcasts
Leveraging more than 10 years of award-winning, professional radio experience, TECHnalysis Research participates in a video-based podcast called Everything Technology.
Research Offerings
TECHnalysis Research offers a wide range of research deliverables that you can read about here.